
    Real-time soft shadows using a single light sample

    We present a real-time rendering algorithm that generates soft shadows of dynamic scenes using a single light sample. As a depth-map algorithm, it can handle arbitrary shadowed surfaces. The shadow-casting surfaces, however, should satisfy a few geometric properties to prevent artifacts. Our algorithm is based on a bivariate attenuation function whose result modulates the intensity of the light causing the shadows. The first argument specifies the distance from the occluding point to the shadowed point; the second argument measures how deep the shadowed point lies inside the shadow. The attenuation function can be implemented using dependent texture accesses; the complete implementation of the algorithm can be accelerated by today's graphics hardware. We outline the implementation and discuss details of artifact prevention and filtering.
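    A minimal sketch of how such a bivariate attenuation function might behave is given below; the falloff shape, parameter names, and constants are illustrative assumptions rather than the paper's actual formulation, and the shader would precompute this function into a 2D lookup texture read via a dependent texture access.

```python
import numpy as np

def soft_shadow_attenuation(occluder_distance, shadow_depth,
                            softness=0.1, max_darkening=0.8):
    """Illustrative bivariate attenuation (assumed form, not the paper's).

    occluder_distance -- distance from the occluding point to the shadowed point
    shadow_depth      -- how deep the shadowed point lies inside the shadow
    Returns a factor in [1 - max_darkening, 1] that modulates light intensity.
    """
    # Assumption: the penumbra widens linearly with the occluder distance.
    penumbra_width = softness * occluder_distance
    # Smoothstep-like transition from lit (depth 0) to fully shadowed.
    t = np.clip(shadow_depth / np.maximum(penumbra_width, 1e-6), 0.0, 1.0)
    darkening = max_darkening * (t * t * (3.0 - 2.0 * t))
    return 1.0 - darkening

# In a shader this table would live in a texture; here we evaluate it directly.
print(soft_shadow_attenuation(occluder_distance=2.0, shadow_depth=0.05))
```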

    Multiscale Visual Comparison of Execution Traces

    Analyzing feature implementation by visual exploration of architecturally-embedded call-graphs

    Maintenance, reengineering, and refactoring of large and complex software systems are commonly based on modifications and enhancements related to features. Before developers can modify feature functionality, they have to locate the relevant code components and understand the components' interaction. In this paper, we present a prototype tool for analyzing the feature implementation of large C/C++ software systems by visual exploration of dynamically extracted call relations between code components. The component interaction can be analyzed on various abstraction levels, ranging from function interaction up to the interaction of the system with shared libraries of the operating system. The user visually explores the component interaction within a multiview visualization system consisting of several textual views and a graphical 3D landscape view. During exploration, the 3D landscape view supports the user, first, in deciding early whether a call relation is essential for understanding the feature and, second, in finding starting points for fine-grained feature analysis using a top-down approach.
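    A rough sketch of the kind of aggregation such a tool performs: dynamically recorded function-level call relations are lifted to coarser components so interaction can be inspected at several abstraction levels. The symbol names, mapping scheme, and levels below are assumptions for illustration, not the tool's actual data model.

```python
from collections import Counter

# Dynamically extracted call relations: (caller, callee, call count).
calls = [
    ("parser.c:read_token", "lexer.c:next_char", 1200),
    ("parser.c:parse_expr", "parser.c:read_token", 300),
    ("render.c:draw_frame", "libGL.so:glDrawElements", 60),
]

def lift(symbol, level):
    """Map a function symbol to a coarser component (illustrative scheme)."""
    module = symbol.split(":")[0]
    if level == "function":
        return symbol
    if level == "module":
        return module
    if level == "library":
        return "shared library" if module.endswith(".so") else "application"
    raise ValueError(level)

def aggregate(calls, level):
    """Sum call counts between lifted components at the chosen level."""
    edges = Counter()
    for caller, callee, count in calls:
        edge = (lift(caller, level), lift(callee, level))
        if edge[0] != edge[1]:  # drop self-calls created by the lifting
            edges[edge] += count
    return edges

print(aggregate(calls, "module"))
print(aggregate(calls, "library"))
```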

    Large-Scale Evaluation of Topic Models and Dimensionality Reduction Methods for 2D Text Spatialization

    Topic models are a class of unsupervised learning algorithms for detecting the semantic structure within a text corpus. Together with a subsequent dimensionality reduction algorithm, topic models can be used to derive spatializations of text corpora as two-dimensional scatter plots, reflecting semantic similarity between the documents and supporting corpus analysis. Although the choice of the topic model, the dimensionality reduction, and their underlying hyperparameters significantly impact the resulting layout, it is unknown which particular combinations result in high-quality layouts with respect to accuracy and perception metrics. To investigate the effectiveness of topic models and dimensionality reduction methods for the spatialization of corpora as two-dimensional scatter plots (or as a basis for landscape-type visualizations), we present a large-scale, benchmark-based computational evaluation. Our evaluation consists of (1) a set of corpora, (2) a set of layout algorithms that are combinations of topic models and dimensionality reductions, and (3) quality metrics for quantifying the resulting layouts. The corpora are given as document-term matrices, and each document is assigned to a thematic class. The chosen metrics quantify the preservation of local and global properties and the perceptual effectiveness of the two-dimensional scatter plots. By evaluating the benchmark on a computing cluster, we derived a multivariate dataset with over 45 000 individual layouts and corresponding quality metrics. Based on the results, we propose guidelines for the effective design of text spatializations that are based on topic models and dimensionality reductions. As a main result, we show that interpretable topic models are beneficial for capturing the structure of text corpora. We furthermore recommend the use of t-SNE as a subsequent dimensionality reduction. (To be published at the IEEE VIS 2023 conference.)
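    A minimal sketch of the pipeline shape the benchmark evaluates, here using LDA as an interpretable topic model followed by t-SNE on a toy document-term matrix; the corpus, hyperparameters, and library choice (scikit-learn) are placeholders, and the actual benchmark covers many more model/reduction combinations and quality metrics.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.manifold import TSNE

# Toy document-term matrix: rows are documents, columns are term counts.
rng = np.random.default_rng(0)
dtm = rng.poisson(1.0, size=(200, 500))

# Step 1: the topic model maps documents into a low-dimensional topic space.
lda = LatentDirichletAllocation(n_components=20, random_state=0)
topic_vectors = lda.fit_transform(dtm)          # shape (200, 20)

# Step 2: dimensionality reduction projects topic vectors to 2D for a scatter plot.
tsne = TSNE(n_components=2, perplexity=30, random_state=0)
layout = tsne.fit_transform(topic_vectors)      # shape (200, 2)

print(layout[:5])
```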

    Automated Combination of Real-Time Shader Programs (EUROGRAPHICS 2007 Short Papers, eds. P. Cignoni and J. Sochor)

    This work proposes an approach for the automatic and generic runtime combination of high-level shader programs. Many recently introduced real-time rendering techniques rely on such programs. The fact that only a single program can be active at a time becomes a central conceptual problem when embedding these techniques into middleware systems or 3D applications. Their implementations frequently demand the combined use of individual shader functionality and, therefore, need to combine existing shader programs. Such a task is often time-consuming, error-prone, requires a skilled software engineer, and needs to be repeated for each further extension. Our extensible approach solves these problems efficiently: it structures a shader program into code fragments, each typed with a predefined semantics. Based on an explicit order of those semantics, the code fragments of different programs can be combined at runtime. This technique facilitates the reuse of shader code as well as the development of extensible rendering frameworks for future hardware generations. We integrated our approach into an object-oriented high-level rendering system.
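    A simplified sketch of the combination idea: each shader program contributes code fragments typed with a semantics, and fragments from different programs are stitched together in a fixed, predefined order of those semantics. The semantics list, program names, and GLSL-like snippets below are illustrative assumptions, not the system's actual interfaces.

```python
# Assumed predefined, explicit order of fragment semantics in the combined shader.
SEMANTIC_ORDER = ["transform", "lighting", "shadowing", "post_color"]

# Each shader program is modeled as a mapping from semantics to a code fragment.
phong_program = {
    "transform": "pos = modelViewProj * vertex;",
    "lighting":  "color = phong(normal, lightDir);",
}
shadow_program = {
    "shadowing": "color *= shadowAttenuation(pos);",
}

def combine(*programs):
    """Merge fragments from several programs according to SEMANTIC_ORDER."""
    combined = []
    for semantic in SEMANTIC_ORDER:
        for program in programs:
            if semantic in program:
                combined.append(f"// {semantic}\n{program[semantic]}")
    return "\n".join(combined)

# Runtime combination of the two programs into one shader body.
print(combine(phong_program, shadow_program))
```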